
Collaborating Authors

Iyad Rahwan


The moral machine: Who lives, who dies, you decide!

#artificialintelligence

You might one day find yourself in a life-threatening situation where your car cannot stop in time to avoid a collision. It faces a choice: either collide with one of the other vehicles, endangering another passenger's life, or put your own life in harm's way. What do you think it would do? If we were driving a car in manual mode, whichever way we chose would be considered a reaction to the situation rather than a deliberate decision: an instinctual, potentially panicked response with no forethought or malice. However, if a programmer were to instruct the car to make the same call in a life-threatening situation, it could be interpreted as premeditated homicide.



An AI-machine learning data challenge: Predicting the unpredictable

#artificialintelligence

AI and machine learning hold enormous promise, scholars at the MIT Sloan CIO Symposium stressed, with advances...


AI algorithm with 'social skills' teaches humans how to collaborate

#artificialintelligence

An international team has developed an AI algorithm with social skills that outperformed humans at cooperating with both people and machines across a variety of two-player games. The researchers, led by Iyad Rahwan, PhD, an MIT Associate Professor of Media Arts and Sciences, tested humans and the algorithm, called S# ("S sharp"), in three types of interactions: machine-machine, human-machine, and human-human. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties. "Two humans, if they were honest with each other and loyal, would have done as well as two machines," said lead author BYU computer science professor Jacob Crandall. "As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are better [since it's programmed to not lie] and it also learns to maintain cooperation once it emerges."
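The underlying study (Crandall et al., "Cooperating with machines," Nature Communications, 2018) describes S# as pairing a repeated-game learning algorithm with "cheap talk" signaling between players. The sketch below is not S# itself; it is a minimal, hypothetical Python illustration of the cooperative behavior the article describes: an agent that signals its intent honestly and maintains cooperation once it emerges, retaliating only briefly after a defection. The class and function names are invented for this example.

```python
# Minimal toy sketch (not the actual S# algorithm): an honest,
# cooperation-maintaining agent for a repeated prisoner's dilemma,
# mimicking the behavior described in the article above.

class HonestCooperator:
    """Plays 'C' (cooperate) by default and never lies about its intent."""

    def signal(self) -> str:
        # Honest "cheap talk": the agent states exactly what it will do.
        return "I will cooperate as long as you do."

    def act(self, opponent_last_move: str) -> str:
        # Tit-for-tat style: answer a defection ('D') with one defection,
        # then return to cooperation, so cooperation persists once it emerges.
        return "D" if opponent_last_move == "D" else "C"


def play(agent_a: HonestCooperator, agent_b: HonestCooperator, rounds: int = 10):
    """Run a repeated game and return the move history."""
    last_a, last_b = "C", "C"  # assume goodwill on the opening round
    history = []
    for _ in range(rounds):
        move_a = agent_a.act(last_b)
        move_b = agent_b.act(last_a)
        history.append((move_a, move_b))
        last_a, last_b = move_a, move_b
    return history


if __name__ == "__main__":
    # Two honest agents lock into mutual cooperation: [('C', 'C'), ...]
    print(play(HonestCooperator(), HonestCooperator()))
```

Two such agents settle into stable mutual cooperation, echoing Crandall's point that honesty and loyalty would have let two humans do as well as two machines; the real S# additionally learns which of several expert strategies to follow against an unknown partner.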


Ethical dilemma on four wheels: How to decide when your self-driving car should kill you

Los Angeles Times

Self-driving cars have a lot of learning to do before they can replace the roughly 250 million vehicles on U.S. roads today. They need to know how to navigate when their pre-programmed maps are out of date. They need to know how to visualize the lane dividers on a street that's covered with snow. And, if the situation arises, they'll need to know whether it's better to mow down a group of pedestrians or spare their lives by steering off the road, killing all passengers onboard. Once self-driving cars are logging serious miles, they're sure to find themselves in situations where an accident is unavoidable.